Unconstrained Optimization of Real Functions in Complex Variables
Authors
Abstract
Nonlinear optimization problems in complex variables are frequently encountered in applied mathematics and engineering applications such as control theory, signal processing, and electrical engineering. Optimization of these problems often requires a first- or second-order approximation of the objective function to generate a new step or descent direction. However, such methods cannot be applied to real functions in complex variables because these are necessarily nonanalytic in their argument, i.e., the Taylor series expansion in their argument alone does not exist. To overcome this problem, the objective function is often redefined as a function of the real and imaginary parts of its complex argument so that standard optimization methods can be applied. We show that real functions in complex variables do have a Taylor series expansion in complex variables, which we then use to generalize existing optimization methods for both general nonlinear optimization problems and nonlinear least squares problems. We then apply these methods to a number of case studies, which show that complex Taylor expansions can lead to greater insight into the structure of the problem and that this structure can often be exploited to improve computational complexity and storage cost.
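The core idea of the abstract, that a real-valued function of a complex variable admits a series expansion in the complex argument and its conjugate (the Wirtinger, or CR-calculus, viewpoint), so that descent directions can be computed directly in the complex domain, can be illustrated with a minimal sketch. This is not the paper's code: the quadratic objective, step size, and function names below are illustrative assumptions; the closed-form Wirtinger gradient used is the standard result for this objective.

```python
# Illustrative sketch (not the paper's implementation): steepest descent on a
# real-valued function of a complex variable using its Wirtinger gradient.
# For real f(z), the steepest-descent step is taken along -df/d(z*).
# Example objective: f(z) = |z - c|^2 = (z - c)(z - c)*, whose Wirtinger
# gradient df/d(z*) = z - c is known in closed form.

def f(z: complex, c: complex) -> float:
    """Real-valued objective f(z) = |z - c|^2."""
    return abs(z - c) ** 2

def wirtinger_grad(z: complex, c: complex) -> complex:
    """df/d(z*) for f(z) = (z - c)(z - c)*; equals z - c."""
    return z - c

def wirtinger_descent(z0: complex, c: complex,
                      step: float = 0.5, iters: int = 50) -> complex:
    """Gradient descent in the complex domain, no split into Re/Im parts."""
    z = z0
    for _ in range(iters):
        z = z - step * wirtinger_grad(z, c)  # move along -df/d(z*)
    return z

z_min = wirtinger_descent(0 + 0j, c=1 + 2j)
print(z_min, f(z_min, 1 + 2j))  # iterates contract geometrically toward c
```

The point of the sketch is that the iteration never touches the real and imaginary parts separately; the structure of the complex expansion is used directly, which is the kind of exploitation of problem structure the abstract refers to.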
Similar Resources
OPTIMIZATION OF ENDURANCE TIME ACCELERATION FUNCTIONS FOR SEISMIC ASSESSMENT OF STRUCTURES
Numerical simulation of structural response is a challenging issue in earthquake engineering, and there has been remarkable progress in this area in the last decade. The Endurance Time (ET) method is a new response-history-based analysis procedure for seismic assessment and structural design in which structures are subjected to a gradually intensifying dynamic excitation and their seismic performance...
A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems
In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...
On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization
We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damping technique modifies the Hessian approximations so that they remain sufficiently positive definite...
Nonlinear coordinate transformations for unconstrained optimization I. Basic transformations
In this two-part article, nonlinear coordinate transformations are discussed to simplify unconstrained global optimization problems and to test their unimodality on the basis of the analytical structure of the objective functions. If the transformed problems are quadratic in some or all the variables, then the optimum can be calculated directly, without an iterative procedure, or the number of ...
An efficient improvement of the Newton method for solving nonconvex optimization problems
The Newton method is one of the most famous numerical methods among line search methods for minimizing functions. It is well known that the search direction and step length play important roles in this class of methods for solving optimization problems. In this investigation, a new modification of the Newton method for solving unconstrained optimization problems is presented. The significant ...
Journal: SIAM Journal on Optimization
Volume: 22, Issue: -
Pages: -
Publication date: 2012